
    Random gradient-free minimization of convex functions

    In this paper, we prove complexity bounds for methods of Convex Optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true both for nonsmooth and smooth problems. For the latter class, we also present an accelerated scheme with the expected rate of convergence $O(n^2/k^2)$, where k is the iteration counter. For Stochastic Optimization, we propose a zero-order scheme and justify its expected rate of convergence $O(n/k^{1/2})$. We also give some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, both for smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.
    Keywords: convex optimization, stochastic optimization, derivative-free methods, random methods, complexity bounds
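    To make the scheme concrete, below is a minimal sketch of one iteration of a Gaussian random-search step of the kind described in the abstract. The smoothing parameter mu, the step size h, and the quadratic test function are illustrative choices for the sketch, not values taken from the paper.

```python
import numpy as np

def gaussian_zero_order_step(f, x, mu=1e-4, h=1e-2, rng=None):
    """One iteration of a random gradient-free scheme: the search direction is
    a random Gaussian vector, and the directional derivative is estimated from
    two function values only (no gradient access)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)         # Gaussian search direction
    g = (f(x + mu * u) - f(x)) / mu * u      # forward-difference estimate
    return x - h * g                         # gradient-type step

# Illustrative quadratic test problem (not from the paper).
A = np.diag([1.0, 10.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
x = np.ones(3)
for _ in range(5000):
    x = gaussian_zero_order_step(f, x, mu=1e-5, h=1e-3)
print(f(x))  # the objective value drifts down towards the optimum 0
```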

    Efficiency of coordinate descent methods on huge-scale optimization problems

    In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial update of decision variables. For these methods, we prove global estimates for the rate of convergence. Surprisingly enough, for certain classes of objective functions, our results are better than the standard worst-case bounds for deterministic algorithms. We present constrained and unconstrained versions of the method, and its accelerated variant. Our numerical tests confirm the high efficiency of this technique on problems of very large size.
    Keywords: convex optimization, coordinate relaxation, worst-case efficiency estimates, fast gradient schemes, Google problem
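    As an illustration of the random partial-update idea, here is a minimal sketch of a random coordinate descent loop. The small quadratic test problem and the choice of coordinate-wise step sizes 1/L_i are assumptions made for the example, not details from the paper.

```python
import numpy as np

def random_coordinate_descent(grad_i, L, x0, iters=2000, rng=None):
    """Minimal sketch of random coordinate descent: at every iteration a single
    coordinate is picked at random and updated with step 1/L_i, so no
    full-dimensional vector operation is needed per iteration."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    n = x.size
    for _ in range(iters):
        i = rng.integers(n)              # uniformly random coordinate
        x[i] -= grad_i(x, i) / L[i]      # partial update of one variable
    return x

# Illustrative strongly convex quadratic f(x) = 0.5 x'Qx - b'x (not from the paper).
Q = np.array([[3.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
grad_i = lambda x, i: Q[i] @ x - b[i]    # i-th partial derivative
L = np.diag(Q)                           # coordinate-wise Lipschitz constants Q_ii
x = random_coordinate_descent(grad_i, L, np.zeros(3))
print(np.allclose(Q @ x, b, atol=1e-3))  # x approaches the minimizer Q^{-1} b
```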

    Smoothness parameter of power of Euclidean norm

    In this paper, we study derivatives of powers of the Euclidean norm. We prove their Hölder continuity and establish explicit expressions for the corresponding constants. We show that these constants are optimal for odd derivatives and at most two times suboptimal for the even ones. In the particular case of integer powers, when Hölder continuity becomes Lipschitz continuity, we improve this result and obtain the optimal constants.
    Comment: J Optim Theory Appl (2020)
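    For orientation, the simplest integer-power case can be written out explicitly. The elementary computation below only illustrates the kind of constants discussed in the abstract; it is not a restatement of the paper's general result.

```latex
% Elementary special case: for the squared Euclidean norm the first derivative
% is Lipschitz continuous, and the constant 2 is clearly optimal.
\[
  f(x) = \|x\|^{2}, \qquad \nabla f(x) = 2x, \qquad
  \|\nabla f(x) - \nabla f(y)\| = 2\,\|x - y\|
  \quad \forall\, x, y \in \mathbb{R}^{n}.
\]
% In the general case the paper studies H\"older continuity of higher
% derivatives, i.e. bounds of the form
% \|D^{k} f(x) - D^{k} f(y)\| \le H_{\nu}\,\|x - y\|^{\nu},
% and gives explicit (and largely optimal) values of the constants H_{\nu}.
```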

    Computationally efficient approximations of the joint spectral radius

    The joint spectral radius of a set of matrices is a measure of the maximal asymptotic growth rate that can be obtained by forming long products of matrices taken from the set. This quantity appears in a number of application contexts but is notoriously difficult to compute and to approximate. We introduce in this paper a procedure for approximating the joint spectral radius of a finite set of matrices with arbitrarily high accuracy. Our approximation procedure is polynomial in the size of the matrices once the number of matrices and the desired accuracy are fixed.
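    To see why long matrix products enter the picture, the following brute-force sketch computes elementary two-sided bounds on the joint spectral radius from all products of a fixed length. It only illustrates the definition and its exponential cost; it is not the polynomial-time approximation procedure proposed in the paper, and the two example matrices are illustrative.

```python
import itertools
import numpy as np

def jsr_bounds(mats, depth):
    """Brute-force two-sided bounds on the joint spectral radius from all
    products of length `depth`: spectral radii of products give lower bounds,
    spectral norms give upper bounds.  Cost grows exponentially with depth."""
    lower, upper = 0.0, 0.0
    for combo in itertools.product(mats, repeat=depth):
        P = np.linalg.multi_dot(combo) if depth > 1 else combo[0]
        lower = max(lower, np.max(np.abs(np.linalg.eigvals(P))) ** (1.0 / depth))
        upper = max(upper, np.linalg.norm(P, 2) ** (1.0 / depth))
    return lower, upper

# Example with two 2x2 matrices (illustrative data).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
for k in (1, 2, 4, 8):
    print(k, jsr_bounds([A, B], k))   # the bounds tighten slowly as k grows
```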

    Tensor Methods for Minimizing Convex Functions with Hölder Continuous Higher-Order Derivatives

    In this paper we study p-order methods for unconstrained minimization of convex functions that are p-times differentiable ($p \geq 2$) with $\nu$-Hölder continuous p-th derivatives. We propose tensor schemes with and without acceleration. For the schemes without acceleration, we establish iteration complexity bounds of $\mathcal{O}\left(\epsilon^{-1/(p+\nu-1)}\right)$ for reducing the functional residual below a given $\epsilon \in (0,1)$. Assuming that $\nu$ is known, we obtain an improved complexity bound of $\mathcal{O}\left(\epsilon^{-1/(p+\nu)}\right)$ for the corresponding accelerated scheme. For the case in which $\nu$ is unknown, we present a universal accelerated tensor scheme with iteration complexity of $\mathcal{O}\left(\epsilon^{-p/[(p+1)(p+\nu-1)]}\right)$. A lower complexity bound of $\mathcal{O}\left(\epsilon^{-2/[3(p+\nu)-2]}\right)$ is also obtained for this problem class.
    Comment: arXiv admin note: text overlap with arXiv:1907.0705
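    Schematically, one step of a tensor method of the kind discussed above minimizes a regularized Taylor model of degree p. The display below is a sketch with a generic regularization constant H; the exact normalization of the regularization term is specified in the paper.

```latex
% One tensor step, schematically: minimize the degree-p Taylor model of f at x
% augmented with a regularization of order p + \nu.
\[
  \Phi_{x,p}(y) \;=\; f(x) \;+\; \sum_{k=1}^{p} \frac{1}{k!}\, D^{k} f(x)[y-x]^{k},
  \qquad
  x_{+} \;\in\; \arg\min_{y}\;\Big\{ \Phi_{x,p}(y) \;+\; \frac{H}{p+\nu}\,\|y-x\|^{p+\nu} \Big\}.
\]
% The \nu-H\"older continuity of the p-th derivative,
% \|D^{p} f(x) - D^{p} f(y)\| \le H_{\nu}\,\|x - y\|^{\nu},
% guarantees that, for H chosen large enough, the regularized model is an
% upper bound on f, which is what drives the complexity estimates quoted above.
```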

    Online analysis of epidemics with variable infection rate

    In this paper, we continue the development of the new epidemiological model HIT, which is suitable for analyzing and predicting the propagation of COVID-19 epidemics. This is a discrete-time model that allows a reconstruction of the dynamics of asymptomatic virus holders from the available daily statistics on the number of new cases. We suggest using a new indicator, the total infection rate, to distinguish the propagation and recession modes of the epidemic. We check our indicator on the available data for eleven different countries and for the whole world. Our reconstructions are very precise. In several cases, we are able to detect the exact dates of the disastrous political decisions that ensured the second wave of the epidemic. It appears that for all our examples the decisions made on the basis of the current number of new cases are wrong. In this paper, we suggest a reasonable alternative. Our analysis shows that all tested countries are in a dangerous zone except Sweden.
    Comment: 24 pages, 13 figures

    A Subgradient Method for Free Material Design

    A small improvement in the structure of a material could save the manufacturer a lot of money. Free material design can be formulated as an optimization problem. However, due to its large scale, second-order methods cannot solve the free material design problem at reasonable sizes. We reformulate the free material optimization (FMO) problem in a saddle-point form in which the inverse of the stiffness matrix A(E) in the constraint is eliminated. The size of A(E) is generally large, denoted as N by N. This is the first formulation of FMO without A(E). We apply the primal-dual subgradient method [17] to solve the restricted saddle-point formulation. This is the first gradient-type method for FMO. Each iteration of our algorithm takes a total of $O(N^2)$ floating-point operations and an auxiliary vector storage of size $O(N)$, compared with formulations involving the inverse of A(E), which require $O(N^3)$ arithmetic operations and an auxiliary vector storage of size $O(N^2)$. To solve the problem, we developed a closed-form solution to a semidefinite least squares problem and an efficient parameter update scheme for the gradient method, which are included in the appendix. We also approximate a solution to the bounded Lagrangian dual problem. The problem is decomposed into small problems, each having only an unknown k-by-k (k = 3 or 6) matrix, and can be solved in parallel. The iteration bound of our algorithm is optimal for general subgradient schemes. Finally, we present promising numerical results.
    Comment: SIAM Journal on Optimization (accepted)
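    For readers unfamiliar with gradient-type methods for saddle-point problems, the sketch below shows a generic projected subgradient iteration with averaging on a toy bilinear example. It is not the restricted saddle-point formulation of FMO nor the primal-dual method [17] used in the paper; all names, step-size choices, and data are illustrative.

```python
import numpy as np

def saddle_subgradient(grad_u, grad_v, proj_u, proj_v, u, v, steps=5000, h0=1.0):
    """Generic projected subgradient scheme with averaging for a convex-concave
    saddle-point problem min_u max_v phi(u, v).  Only a sketch of the
    gradient-type approach; not the specific method of the paper."""
    u_avg, v_avg = np.zeros_like(u), np.zeros_like(v)
    for k in range(1, steps + 1):
        h = h0 / np.sqrt(k)                  # classical O(1/sqrt(k)) step size
        u = proj_u(u - h * grad_u(u, v))     # descent step in the primal variable
        v = proj_v(v + h * grad_v(u, v))     # ascent step in the dual variable
        u_avg += (u - u_avg) / k             # running average of the iterates
        v_avg += (v - v_avg) / k
    return u_avg, v_avg

# Toy bilinear example phi(u, v) = v^T (A u - b) over boxes (illustrative only).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
grad_u = lambda u, v: A.T @ v
grad_v = lambda u, v: A @ u - b
box = lambda x: np.clip(x, -1.0, 1.0)
u, v = saddle_subgradient(grad_u, grad_v, box, box, np.zeros(2), np.zeros(2))
print(A @ u - b)   # the residual approaches zero slowly, at the subgradient rate
```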

    Double smoothing technique for infinite-dimensional optimization problems with applications to optimal control

    In this paper, we propose an efficient technique for solving some infinite-dimensional problems over sets of functions of time. In our problem, besides the convex pointwise constraints on the state variables, we have convex coupling constraints with a finite-dimensional image. Hence, we can formulate a finite-dimensional dual problem, which can be solved by efficient gradient methods. We show that it is possible to reconstruct an approximate primal solution. In order to accelerate our schemes, we apply the double-smoothing technique. As a result, our method needs $O\left(\frac{1}{\epsilon}\ln\frac{1}{\epsilon}\right)$ gradient iterations, where $\epsilon$ is the desired accuracy of the solution of the primal-dual problem. Our approach covers, in particular, optimal control problems with a trajectory governed by a system of ordinary differential equations. An additional requirement can be that the trajectory passes through certain convex sets at given moments of time.
    Keywords: convex optimization, optimal control, fast gradient methods, complexity bounds, smoothing technique
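    The double-smoothing idea can be summarized schematically as follows; the notation (coupling map g, prox-function d, parameters mu and kappa) is generic and chosen for illustration, not taken verbatim from the paper.

```latex
% Double smoothing, schematically: the nonsmooth concave dual function
% \theta is regularized twice before a fast gradient method is applied.
\[
  \theta(\lambda) \;=\; \min_{x \in X}\ \big\{ f(x) + \langle \lambda,\, g(x) \rangle \big\},
  \qquad
  \theta_{\mu,\kappa}(\lambda) \;=\;
  \min_{x \in X}\ \big\{ f(x) + \langle \lambda,\, g(x) \rangle + \mu\, d(x) \big\}
  \;-\; \frac{\kappa}{2}\,\|\lambda\|^{2}.
\]
% The prox-term \mu d(x) makes the dual smooth (Lipschitz-continuous gradient),
% while the term \frac{\kappa}{2}\|\lambda\|^{2} makes it strongly concave, so a
% fast gradient method converges linearly and the gradient of
% \theta_{\mu,\kappa} (essentially the primal infeasibility) can be driven
% below \epsilon; balancing \mu and \kappa against \epsilon yields the
% O\!\big(\tfrac{1}{\epsilon}\ln\tfrac{1}{\epsilon}\big) bound quoted above.
```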

    First-order methods of smooth convex optimization with inexact oracle

    In this paper, we analyze different first-order methods of smooth convex optimization employing inexact first-order information. We introduce the notion of an approximate first-order oracle. The list of examples of such an oracle includes the smoothing technique, Moreau-Yosida regularization, Modified Lagrangians, and many others. For different methods, we derive complexity estimates and study the dependence between the desired accuracy in the objective function and the accuracy of the oracle. It appears that in the inexact case, the superiority of the fast gradient methods over the classical ones is no longer absolute. Contrary to the simple gradient schemes, fast gradient methods necessarily suffer from accumulation of errors. Thus, the choice of the method depends both on the desired accuracy and on the accuracy of the oracle. We present applications of our results to smooth convex-concave saddle point problems, to the analysis of Modified Lagrangians, to the prox-method, and some others.
    Keywords: smooth convex optimization, first-order methods, inexact oracle, gradient methods, fast gradient methods, complexity bounds
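    Roughly speaking, the inexact oracle can be described by a two-sided quadratic bound, and the error-accumulation trade-off can then be stated in terms of the oracle error delta. The display below is a sketch with constants omitted; see the paper for the precise statements.

```latex
% A pair (f_\delta(x), g_\delta(x)) is an inexact (\delta, L)-oracle for f at x
% if, for all y,
\[
  0 \;\le\; f(y) - f_{\delta}(x) - \langle g_{\delta}(x),\, y - x \rangle
    \;\le\; \frac{L}{2}\,\|y - x\|^{2} + \delta .
\]
% With such an oracle, after k iterations the classical gradient method keeps
% an accuracy of order L R^{2}/k + \delta, whereas the fast gradient method
% gives order L R^{2}/k^{2} + k\,\delta: the oracle error accumulates, which is
% exactly the trade-off between speed and robustness discussed in the abstract.
```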